Small Intentional Systems - JMC - Dec 19, 1979
Discussing small intentional systems has the following rationale:
1. Ascribing intentional qualities to systems that are simple enough
to be fully understood as information processing systems, e.g. programs,
allows us to understand the relations between intentionality and the
underlying information process.
2. By analogy to the number system, we are looking at the 0, 1 and 2
of the set of intentional systems. While no one writes articles about
0, and one would not introduce cardinal numbers if one only intended
to count the null set, the regularity and comprehensibility of the
number system are enhanced by beginning with 0, even though admitting
0 as a number was a late development historically.
Let me also point out that the criteria for an intentional system
can be looked at from two points of view. There has been a problem of
convincing people of a behaviorist cast of mind that intentional systems
are needed at all to explain psychological phenomena or to express the
information people have (or to build intelligent programs). For this
purpose, it is necessary to exhibit systems that are very difficult to
understand or describe without intentional ascriptions. Zenon Pylyshyn's
recent paper seems to be especially concerned with making the case for
intentionality to skeptics.
However, having once decided to use intentional ascriptions, there
is no need to limit them to circumstances in which their use is
unavoidable. Indeed, systems that admit both intentional and other
descriptions may be most illuminating at the beginning of the study of
intentional systems.
Here are some candidates for small intentional systems.
1. One trivial system is a book or other collection of declarative
sentences in a natural or logical language. Relative to the language, we
may want to consider it as believing the sentences. Moreover, if the text
is at all long, cryptographic considerations make it unlikely that it
has any other interpretation as a collection of sentences.
However, it doesn't change with time, and it isn't clear that there is
much to be said about purely static intentional systems. On the other
hand, a dynamic system may get into a static state, and there might be a
discontinuity in the theory if we had to regard it as thereby losing its
intentional properties. It seems best to reserve judgment about books and
see how they fit into a later theory when and if we get one.
2. Next we can imagine a reasoning process that generates new
sentences from old ones by deduction or even conjecture but has no
inputs or outputs. Moreover, we may suppose that it uses a British
Museum algorithm or other non-goal-directed algorithm. From the outside
we may consider its "beliefs", their truth, the correctness of its
reasoning, and whether it will ever reach a goal. (A sketch in code
of such a closed reasoner follows the list of candidates.)
3. Simple goal-directed systems, e.g. thermostats and more
elaborate temperature control systems. It seems clear that there
is no point in ascribing self-consciousness, i.e. beliefs about itself,
to a thermostat. A heating engineer or thermostat repairman has the
option of a design stance towards the thermostat unless the system
is very complicated, but the householder may not have the information
to take other than an intentional stance. It would be a very awkward
theory if the householder were allowed the intentional stance but
forbidden it if he subsequently studied the design. (A sketch of the
thermostat in code also follows the list.)
4. There are small self-conscious systems. A system that deals
with physical objects would have a very strange view of the world if
its own "body" were not considered a physical object, if nothing were
said about the relation between the system's actions and the state of
that object, or if nothing were said about the ubiquity of that object
in all situations. This would not require it to recognize "other
minds" or to regard its own goals or beliefs as objects.
5. A higher level in an intentional hierarchy occurs when
beliefs and goals are themselves considered as objects about which
there can be beliefs and goals. Quine's doctrine that the
ontology comprises the domain of the bound variables seems convenient
for describing the intentional hierarchy.
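Here is the sketch promised under item 2: a closed, non-goal-directed
reasoner written in Python. It exhaustively applies modus ponens to
propositional Horn rules, with no inputs or outputs. The representation
and all the names are illustrative assumptions only; a genuine British
Museum algorithm over an infinite language would of course never
terminate, while this finite propositional version does.

    def british_museum(facts, rules):
        # facts: a set of atomic sentences.
        # rules: a list of (premises, conclusion) pairs.
        # Exhaustive, non-goal-directed forward chaining: every derivable
        # sentence is eventually added, whether or not anyone wants it.
        beliefs = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in beliefs and all(p in beliefs for p in premises):
                    beliefs.add(conclusion)  # a new "belief" from old ones
                    changed = True
        return beliefs

    # From the outside we may consider its "beliefs" and whether it will
    # ever reach a given sentence:
    assert "r" in british_museum({"p"}, [({"p"}, "q"), ({"q"}, "r")])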
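And the thermostat of item 3, again as an illustrative Python sketch
rather than a description of any actual device. The code is the
design-stance account available to the engineer; the comments give the
intentional account that may be all the householder has.

    class Thermostat:
        def __init__(self, set_point, deadband=1.0):
            self.set_point = set_point  # its "goal": the room at set_point
            self.deadband = deadband

        def act(self, temperature):
            if temperature < self.set_point - self.deadband:
                return "heat on"    # "believes the room is too cold"
            if temperature > self.set_point + self.deadband:
                return "heat off"   # "believes the room is too hot"
            return "no action"      # "believes the temperature is right"

    t = Thermostat(set_point=20.0)
    assert t.act(17.0) == "heat on"
    assert t.act(23.0) == "heat off"

Note that nothing in the code refers to the thermostat itself, which is
why there is no point in ascribing self-consciousness to it.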
Bob Moore described systems that have beliefs explicitly
represented as sentences in a given language. As is apparent, and as
Bob would probably agree, it is possible to ascribe beliefs to
systems in which they aren't represented as sentences. It might turn
out that a system in which sentences are explicitly represented
could best be regarded as having beliefs different from those
sentences. However, I believe the case Bob considered is an
important one. Much can be gained from designing systems whose
beliefs are explicitly represented.
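A minimal sketch of that case, on the simplest possible reading:
beliefs stored explicitly as sentence strings, with belief amounting to
membership in the stored set. The class and its names are assumptions
for illustration, not Bob's formulation.

    class SentenceBeliefs:
        def __init__(self):
            self.sentences = set()  # beliefs represented explicitly as sentences

        def tell(self, sentence):
            self.sentences.add(sentence)

        def believes(self, sentence):
            # Here belief is simply membership; as remarked above, a system
            # might better be regarded as having beliefs different from its
            # stored sentences.
            return sentence in self.sentences

    b = SentenceBeliefs()
    b.tell("snow is white")
    assert b.believes("snow is white")
    assert not b.believes("grass is red")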
As I understand them, both Zenon and John Haugeland have
reservations about small intentional systems. I would like to
understand what they are.
I don't intend to alter this page significantly, but the
file SMALL.F79[F79,JMC] will eventually contain considerations on
small intentional systems that will be updated on the basis of
my further thinking and whatever discussion of these issues might
ensue.